
    Investigating Perceptual Congruence Between Data and Display Dimensions in Sonification

    The relationships between sounds and their perceived meaning and connotations are complex, making auditory perception an important factor to consider when designing sonification systems. Listeners often have a mental model of how a data variable should sound during sonification, and this model is not considered in most data:sound mappings. This can lead to mappings that are difficult to use and can cause confusion. To investigate this issue, we conducted a magnitude estimation experiment to map how roughness, noise and pitch relate to the perceived magnitude of stress, error and danger. These parameters were chosen due to previous findings which suggest perceptual congruency between these auditory sensations and conceptual variables. Results from this experiment show that polarity and scaling preference depend on the data:sound mapping. This work provides polarity and scaling values that may be directly utilised by sonification designers to improve auditory displays in areas such as accessible and mobile computing, process monitoring and biofeedback.
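
    The polarity and scaling values reported here are exactly the kind of numbers a parameter-mapping sonification can consume. Below is a minimal Python sketch of how such a mapping might use them; the function, frequency range, and example exponent are illustrative assumptions, not values from the study.

    def map_to_frequency(value, v_min, v_max, f_min=200.0, f_max=2000.0,
                         exponent=1.0, positive_polarity=True):
        """Map a data value onto pitch with a power-law scaling.

        exponent: scaling exponent (e.g. from magnitude estimation);
                  1.0 is linear, <1 compresses, >1 expands.
        positive_polarity: if False, larger data values map to lower pitch.
        """
        t = (value - v_min) / (v_max - v_min)  # normalise to [0, 1]
        if not positive_polarity:
            t = 1.0 - t                        # apply negative polarity
        t = t ** exponent                      # apply scaling preference
        # Interpolate on a log-frequency axis so equal data steps sound
        # like roughly equal pitch steps.
        return f_min * (f_max / f_min) ** t

    # Hypothetical "danger" mapping: positive polarity, exponent 0.8.
    print(map_to_frequency(75, 0, 100, exponent=0.8))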

    Mixed speech and non-speech auditory displays: impacts of design, learning, and individual differences in musical engagement

    Presented at the 25th International Conference on Auditory Display (ICAD 2019), 23-27 June 2019, Northumbria University, Newcastle upon Tyne, UK. Information presented in auditory displays is often spread across multiple streams to make it easier for listeners to distinguish between different sounds and changes in multiple cues. Because auditory attention is a limited resource, and because listeners' ears are often untrained compared to their visual sense, studies have tried to determine the extent to which listeners can monitor different auditory streams without compromising performance in using the displays. This study investigates the difference between non-speech auditory displays, speech auditory displays, and mixed displays, and the effects of the different display designs and individual differences on performance and learnability. Results showed that practice with feedback significantly improves performance regardless of the display design, and that individual differences such as active engagement in music and motivation can predict how well a listener is able to learn to use these displays. Findings of this study contribute to understanding how musical experience can be linked to the usability of auditory displays, as well as the capability of humans to learn to use their auditory senses to overcome visual workload and receive important information.

    Cofactor regeneration by a soluble pyridine nucleotide transhydrogenase for biological production of hydromorphone

    We have applied the soluble pyridine nucleotide transhydrogenase of Pseudomonas fluorescens to a cell-free system for the regeneration of the nicotinamide cofactors NAD and NADP in the biological production of the important semisynthetic opiate drug hydromorphone. The original recombinant whole-cell system suffered from cofactor depletion resulting from the action of an NADP(+)-dependent morphine dehydrogenase and an NADH-dependent morphinone reductase. By applying a soluble pyridine nucleotide transhydrogenase, which can transfer reducing equivalents between NAD and NADP, we demonstrate in a cell-free system that efficient cofactor cycling occurs in the presence of catalytic amounts of cofactors, resulting in high yields of hydromorphone. The ratio of morphine dehydrogenase, morphinone reductase, and soluble pyridine nucleotide transhydrogenase is critical for diminishing the production of the unwanted by-product dihydromorphine and for optimum hydromorphone yields. Application of the soluble pyridine nucleotide transhydrogenase to the whole-cell system resulted in an improved biocatalyst with an extended lifetime. These results demonstrate the usefulness of the soluble pyridine nucleotide transhydrogenase and its wider application as a tool in metabolic engineering and biocatalysis.
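
    The cofactor cycle described above reduces to three coupled reactions: the dehydrogenase consumes NADP+ and produces NADPH, the reductase consumes NADH and produces NAD+, and the transhydrogenase closes the loop by moving reducing equivalents from NADPH to NAD+. A toy mass-action simulation, assuming arbitrary rate constants and concentrations (not measurements from the paper) and omitting the dihydromorphine by-product pathway, illustrates why catalytic cofactor amounts suffice:

    def simulate(steps=20000, dt=0.001,
                 k_mdh=1.0,   # morphine dehydrogenase (NADP+-dependent)
                 k_mr=1.0,    # morphinone reductase (NADH-dependent)
                 k_sth=5.0):  # soluble pyridine nucleotide transhydrogenase
        # Substrate in excess, cofactors only catalytic (arbitrary units).
        morphine, morphinone, hydromorphone = 10.0, 0.0, 0.0
        nadp, nadph = 0.1, 0.0
        nad, nadh = 0.1, 0.0
        for _ in range(steps):
            r1 = k_mdh * morphine * nadp   # morphine + NADP+ -> morphinone + NADPH
            r2 = k_mr * morphinone * nadh  # morphinone + NADH -> hydromorphone + NAD+
            r3 = k_sth * nadph * nad       # NADPH + NAD+ -> NADP+ + NADH
            morphine -= r1 * dt
            morphinone += (r1 - r2) * dt
            hydromorphone += r2 * dt
            nadp += (r3 - r1) * dt
            nadph += (r1 - r3) * dt
            nad += (r2 - r3) * dt
            nadh += (r3 - r2) * dt
        return hydromorphone

    # Hydromorphone accumulates even though cofactor pools stay catalytic.
    print(simulate())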

    The past, present, and promise of sonification

    The use of sound to systematically communicate data has been with us for a long time and has received considerable research attention, albeit across a broad range of distinct fields of inquiry. Sonification is uniquely capable of conveying series and patterns, trends and outliers, and it effortlessly carries affect and emotion related to those data. And sound, either by itself or in conjunction with visual, tactile, or even olfactory representations, can make data exploration more compelling and more accessible to a broader range of individuals. Nevertheless, sonification and auditory displays still occupy only a sliver of popular mindshare: most people have never thought about using non-speech sound in this manner, even though they are certainly very familiar with other intentional uses of sound to convey status, notifications, and warnings. This article provides a brief history of sonification, introduces terms, quickly surveys a range of examples, and discusses the past, present, and as-yet unrealized future promise of using sound to expand the way we can communicate about data, broaden the use of auditory displays in society, and make science more engaging and more accessible.

    Anger effects on driver situation awareness and driving performance

    Research has suggested that emotional states have critical effects on various cognitive processes, which are important components of situation awareness (Endsley, 1995b). Evidence from driving studies has also emphasized the importance of driver situation awareness for performance and safety. However, to date, little research has investigated the relationship between emotional effects and driver situation awareness. In our experiment, 30 undergraduates drove in a simulator after induction of either anger or neutral affect. Results showed that an induced angry state can degrade driver situation awareness as well as driving performance compared to a neutral state. However, the angry state did not have an impact on participants' subjective judgment or perceived workload, which might imply that the effects of anger occurred below their level of conscious awareness. One reason participants failed to compensate for their performance deficits might be that they were not aware of the severe impact of emotion on driving performance.

    Sonification Mapping Configurations: Pairings Of Real-Time Exhibits And Sound

    Presented at the 19th International Conference on Auditory Display (ICAD2013) on July 6-9, 2013 in Lodz, Poland. Visitors to aquariums typically rely on their vision to interact with live exhibits that convey rich descriptive and aesthetic visual information. However, some visitors may prefer or need to have an alternative interpretation of the exhibit's visual scene to improve their experience. Musical sonification has been explored as an interpretive strategy for this purpose, and related work provides some guidance for sonification design, yet more empirical work on developing and validating the music-to-visual scene mappings needs to be completed. This paper discusses work to validate mappings that were developed through an investigation of musician performances for two specific live animal exhibits at the Georgia Aquarium. In this proposed study, participants will provide feedback on musical mapping examples, which will help inform the design of a real-time sonification system for aquarium exhibits. Here, we describe our motivation, methods, and expected contributions.

    Evaluation of Psychoacoustic Sound Parameters for Sonification

    Sonification designers have little theory or experimental evidence to guide the design of data-to-sound mappings. Many mappings use acoustic representations of data values which do not correspond with the listener's perception of how that data value should sound during sonification. This research evaluates data-to-sound mappings that are based on psychoacoustic sensations, in an attempt to move towards data-to-sound mappings that are aligned with the listener's perception of the data value's auditory connotations. Multiple psychoacoustic parameters were evaluated over two experiments, which were designed in the context of a domain-specific problem: detecting the level of focus of an astronomical image through auditory display. Recommendations for designing sonification systems with psychoacoustic sound parameters are presented based on our results.
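
    As a concrete illustration of the image-focus scenario, the sketch below uses the variance of a discrete Laplacian, one common sharpness proxy (an assumption here, not necessarily the measure used in the experiments), and maps its value onto pitch.

    import numpy as np

    def focus_level(image):
        # Variance of a discrete Laplacian: higher for sharper images.
        lap = (-4 * image
               + np.roll(image, 1, axis=0) + np.roll(image, -1, axis=0)
               + np.roll(image, 1, axis=1) + np.roll(image, -1, axis=1))
        return float(lap.var())

    def focus_to_pitch(level, level_max, f_min=220.0, f_max=880.0):
        # Sharper image -> higher pitch, on a log-frequency axis.
        t = min(level / level_max, 1.0)
        return f_min * (f_max / f_min) ** t

    rng = np.random.default_rng(0)
    sharp = rng.random((64, 64))                   # high-frequency detail
    blurry = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, -1, 0)
              + np.roll(sharp, 1, 1) + np.roll(sharp, -1, 1)) / 5
    ceiling = focus_level(sharp)
    print(focus_to_pitch(focus_level(blurry), ceiling))  # lower pitch
    print(focus_to_pitch(focus_level(sharp), ceiling))   # 880.0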

    Web Sonification Sandbox - an Easy-to-Use Web Application for Sonifying Data and Equations

    Auditory and multimodal presentation of data ("auditory graphs") can allow for discoveries in a data set that are sometimes impossible with visual-only inspection. At the same time, multimodal graphs can make data, and the STEM fields that rely on them, more accessible to a much broader range of people, including many with disabilities. A variety of software tools have been developed to turn data into sound, including the widely used Sonification Sandbox, but there remains a need for a simple, powerful, and more accessible tool for the construction and manipulation of multimodal graphs. Web-based audio functionality is now at the point where it can be leveraged to provide just such a tool. Thus, we developed a web application, the Web Sonification Sandbox (or simply the Web Sandbox), that allows users to create and manipulate multimodal graphs that convey information through both sonification and visualization. The Web Sandbox is designed to be usable by individuals with no technical or musical expertise, which separates it from existing software. Its ease of use, combined with its multimodal design, makes it accessible to a diverse audience of users. Nevertheless, the application is also powerful and flexible enough to support advanced users.
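
    The core idea of an auditory graph is compact enough to sketch. The snippet below renders a data series as a sequence of tones in a WAV file; it is a minimal stand-in for the concept, not the Web Sandbox's browser-based implementation, and the frequency range and note length are arbitrary choices.

    import math, struct, wave

    def sonify_series(data, path="auditory_graph.wav", rate=44100,
                      note_dur=0.25, f_min=220.0, f_max=880.0):
        lo, hi = min(data), max(data)
        span = (hi - lo) or 1.0
        n = int(rate * note_dur)
        frames = bytearray()
        for value in data:
            t = (value - lo) / span
            freq = f_min * (f_max / f_min) ** t  # log-frequency mapping
            for i in range(n):
                # Short linear fade in/out to avoid clicks between notes.
                env = min(1.0, i / 500, (n - i) / 500)
                sample = 0.4 * env * math.sin(2 * math.pi * freq * i / rate)
                frames += struct.pack("<h", int(sample * 32767))
        with wave.open(path, "wb") as f:
            f.setnchannels(1)   # mono
            f.setsampwidth(2)   # 16-bit samples
            f.setframerate(rate)
            f.writeframes(bytes(frames))

    sonify_series([3, 1, 4, 1, 5, 9, 2, 6])  # rising pitch = rising value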

    Mobile audio designs monkey: An audio augmented reality designer's tool

    Presented at the 11th International Conference on Auditory Display (ICAD2005). Audio Augmented Reality (AR) design is currently a very difficult task. To develop audio for an AR environment, a designer must have technical skills which are unrelated to the design process. The designer should be focusing on the creativity, design, and logic of the AR rather than the details of the audio. To support the design process, we developed an audio AR designer's tool called Mobile Audio Designs (MAD) Monkey, using the standard User-Centered Design process. The stages of the iterative design process are described here, and the features of the resulting system are discussed. Evaluation of the prototype and plans for further development are also described.

    Navigation performance in a virtual environment with bonephones

    Presented at the 11th International Conference on Auditory Display (ICAD2005). Audio navigation interfaces have traditionally been studied (and implemented) using headphones. However, many potential users (especially those with visual impairments) are hesitant to adopt these emerging wayfinding technologies if doing so requires them to reduce their ability to hear environmental sounds by wearing headphones. In this study we examined the performance of the SWAN audio navigation interface using bone-conduction headphones ("bonephones"), which do not cover the ear. Bonephones enabled all participants to complete the navigation tasks with good efficiency, though not immediately as effectively as regular headphones. Given the functional success here, and considering that the spatialization routines were not optimized for bonephones (essentially a worst-case scenario), the prospects are excellent for more widespread use of bone conduction for auditory navigation, and likely for many other auditory displays.
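
    For a sense of what the spatialization involves, the toy function below does the simplest possible thing, constant-power stereo panning of a navigation beacon; it is an illustrative stand-in, since systems like SWAN use much richer spatial rendering than channel gains alone.

    import math

    def pan_beacon(bearing_deg):
        # Constant-power panning: left^2 + right^2 == 1 at every bearing.
        # Bearing is the beacon's angle from straight ahead, clamped to
        # +/-90 degrees; negative bearings are to the listener's left.
        bearing = max(-90.0, min(90.0, bearing_deg))
        theta = (bearing + 90.0) / 180.0 * (math.pi / 2)  # 0..pi/2
        return math.cos(theta), math.sin(theta)  # (left, right) gains

    print(pan_beacon(-45))  # beacon to the left: louder left channel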